Hybrid particle swarm optimization with multi-region sampling strategy to solve multi-objective flexible job-shop scheduling problem
ZHANG Wenqiang, XING Zheng, YANG Weidong
Journal of Computer Applications    2021, 41 (8): 2249-2257.   DOI: 10.11772/j.issn.1001-9081.2020101675
Flexible Job-shop Scheduling Problem (FJSP) is a widely applied combinatorial optimization problem. Aiming at the problems that the solution process of multi-objective FJSP is complex and the algorithm easily falls into local optima, a Hybrid Particle Swarm Optimization algorithm with Multi-Region Sampling strategy (HPSO-MRS) was proposed to optimize the makespan and the total machine delay time simultaneously. The multi-region sampling strategy reorganized the particles according to the regions of the Pareto front they belong to, and guided the moving directions of the particles in the multiple sampled regions of the Pareto front, thereby adjusting the convergence ability of the particles in multiple directions and improving the uniformity of their distribution to a certain extent. In addition, in encoding and decoding, a decoding strategy with an interpolation mechanism was used to eliminate potential local left shifts; in particle updating, the update method of the traditional Particle Swarm Optimization (PSO) algorithm was combined with the crossover and mutation operators of Genetic Algorithm (GA), which enriched the search process and prevented the algorithm from falling into local optima. The proposed algorithm was tested on the benchmark instances Mk01-Mk10 and compared with the Hybrid Particle Swarm Optimization algorithm (HPSO), Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), Strength Pareto Evolutionary Algorithm 2 (SPEA2) and Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) in terms of effectiveness and running efficiency. Significance analysis of the experimental results shows that HPSO-MRS was significantly better than the comparison algorithms on the convergence indicators Hypervolume (HV) and Inverted Generational Distance (IGD) in 85% and 77.5% of the control groups respectively, and on the distribution indicator Spacing in 35% of the control groups, while it was never significantly worse than the comparison algorithms on any of the three indicators. Thus, compared with the comparison algorithms, the proposed algorithm has better convergence and distribution performance.
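The hybrid update at the core of such an algorithm — a standard PSO velocity/position step followed by GA-style crossover and mutation — can be sketched as follows. This is a minimal illustration on a continuous encoding rather than the paper's FJSP permutation encoding, and all parameter values (`w`, `c1`, `c2`, the crossover and mutation rates) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_ga_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                cx_rate=0.8, mut_rate=0.1):
    """One hybrid update: a PSO velocity/position step, then GA-style
    crossover with the global best and Gaussian mutation."""
    n, d = pos.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # PSO step: pull each particle toward its personal and the global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # GA crossover: arithmetic blend with gbest for a fraction of particles
    mask = rng.random(n) < cx_rate
    alpha = rng.random((mask.sum(), d))
    pos[mask] = alpha * pos[mask] + (1 - alpha) * gbest
    # GA mutation: small Gaussian perturbation to preserve swarm diversity
    mut = rng.random((n, d)) < mut_rate
    pos[mut] += rng.normal(scale=0.1, size=mut.sum())
    return pos, vel
```

The mutation term is what keeps the swarm from collapsing prematurely onto a single local optimum, which is the failure mode the abstract targets.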
Task allocation optimization for automated guided vehicles based on variable neighborhood simulated annealing algorithm
YANG Wei, LI Ran, ZHANG Kun
Journal of Computer Applications    2021, 41 (10): 3056-3062.   DOI: 10.11772/j.issn.1001-9081.2020121919
In order to solve the task allocation problem of multi-Automated Guided Vehicle (AGV) storage systems, a Variable Neighborhood_Simulated Annealing (VN_SA) algorithm was proposed. Firstly, according to the operation process and operating characteristics of the AGVs in the system, a multi-objective optimization model of task allocation for the multi-AGV storage system was built, which took the path cost, time cost and task equilibrium cost of the AGVs during task execution as objectives and added the power consumption of AGVs driving with and without load to the constraints, making the model closer to practice. Then, a VN_SA algorithm was designed for the characteristics of the problem: the neighborhood perturbation operations expanded the search range of simulated annealing and, combined with its probabilistic acceptance of worse solutions, enabled the algorithm to jump out of local optima and achieve better global exploration. Simulation experiments were carried out on instances with 20, 50 and 100 tasks. Experimental results show that the optimized total cost of the proposed algorithm is reduced by 6.4, 7.5 and 13.2 percentage points respectively compared with Genetic Algorithm (GA), which verifies the effectiveness of the algorithm under different task sizes and shows that it has good convergence and search efficiency.
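The interplay between neighborhood perturbation and probabilistic acceptance can be sketched as below. The neighborhood operators, the cooling schedule and the rule for escalating to a larger neighborhood are illustrative assumptions, not the paper's exact design.

```python
import math, random

random.seed(1)

def vn_sa(cost, neighborhoods, x0, t0=100.0, cooling=0.95, iters=200):
    """Minimal variable-neighborhood simulated annealing sketch.
    `neighborhoods` is a list of move operators of increasing strength;
    a rejected move escalates to the next, a success resets to the first."""
    x, t = x0, t0
    best, best_c = x0, cost(x0)
    k = 0
    for _ in range(iters):
        y = neighborhoods[k](x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            k = 0                              # success: smallest neighborhood
            if cost(x) < best_c:
                best, best_c = x, cost(x)
        else:
            k = (k + 1) % len(neighborhoods)   # escalate the perturbation
        t *= cooling                           # geometric cooling
    return best, best_c
```

Escalating to a stronger perturbation after rejections is what widens the search range beyond plain simulated annealing.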
Chinese-Vietnamese news topic discovery method based on cross-language neural topic model
YANG Weiya, YU Zhengtao, GAO Shengxiang, SONG Ran
Journal of Computer Applications    2021, 41 (10): 2879-2884.   DOI: 10.11772/j.issn.1001-9081.2020122054
In the Chinese-Vietnamese cross-language news topic discovery task, Chinese-Vietnamese parallel corpora are scarce, so it is difficult to train high-quality bilingual word embeddings; moreover, news texts are generally long, so bilingual word embeddings struggle to represent them well. To solve these problems, a Chinese-Vietnamese news topic discovery method based on Cross-Language Neural Topic Model (CL-NTM) was proposed, in which news texts were represented by their topic information and bilingual semantic alignment was converted into a bilingual topic alignment task. Firstly, neural topic models based on the variational autoencoder were trained for Chinese and Vietnamese respectively to obtain monolingual abstract representations of the topics. Then, a small-scale parallel corpus was used to map the bilingual topics into the same semantic space. Finally, the K-means method was used to cluster the bilingual topic representations to find the topics of news event clusters. Experimental results show that, compared with the Improved Chinese-English Latent Dirichlet Allocation model (ICE-LDA), the proposed method increases the Macro-F1 value and topic coherence by 4 percentage points and 7 percentage points respectively, showing that it can effectively improve the clustering quality and interpretability of news topics.
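The last two steps — mapping the bilingual topic vectors into one space using a small set of paired representations, then clustering them — can be sketched as follows. Using an orthogonal Procrustes mapping is an assumption for illustration; the abstract does not specify the mapping's exact form.

```python
import numpy as np

def align_topics(src, tgt):
    """Learn an orthogonal map from source-language topic vectors to the
    target topic space from paired (parallel-corpus) representations,
    via orthogonal Procrustes: W = U V^T from the SVD of src^T tgt."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means for clustering the aligned topic vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

After alignment, Chinese and Vietnamese reports on the same event should fall into the same cluster, which is exactly what the Macro-F1 evaluation measures.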
Performance analysis of wireless key generation with multi-bit quantization under imperfect channel estimation condition
DING Ning, GUAN Xinrong, YANG Weiwei, LI Tongkai, WANG Jianshe
Journal of Computer Applications    2020, 40 (1): 143-147.   DOI: 10.11772/j.issn.1001-9081.2019061004
Since channel estimation error seriously affects the consistency of the keys generated by the two communicating parties, a multi-bit quantization wireless key generation scheme under imperfect channel estimation conditions was proposed. Firstly, a channel estimation error model was established to investigate the influence of imperfect channel estimation on wireless key generation. Then, a multi-bit key quantizer with guard bands was designed, and the performance of the wireless key was improved by optimizing the quantization parameters. Closed-form expressions for the Key Disagreement Rate (KDR) and the Effective Key Generation Rate (EKGR) were derived, revealing the relationships between the pilot signal power, quantization order and guard bands and these two performance indicators. The simulation results show that increasing the transmit pilot power effectively reduces the KDR; that increasing the quantization order improves the key generation rate but also increases the KDR; and that increasing the quantization order while choosing an appropriate guard band size can effectively reduce the KDR.
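A guard-band multi-bit quantizer of the kind described can be sketched as follows. Placing the boundaries at empirical quantiles (equiprobable levels) and the `guard` width are illustrative assumptions.

```python
import numpy as np

def quantize_with_guard(h, bits=2, guard=0.2):
    """Multi-bit quantizer with guard bands (sketch). `h` holds channel
    amplitude estimates; samples within `guard` of a quantization boundary
    are discarded (index -1), since estimation noise makes the two parties
    likely to disagree there -- this is what lowers the KDR."""
    levels = 2 ** bits
    # boundaries at empirical quantiles so the levels are equiprobable
    qs = np.quantile(h, np.linspace(0, 1, levels + 1))
    idx = np.clip(np.searchsorted(qs, h, side="right") - 1, 0, levels - 1)
    inner = qs[1:-1]                       # interior decision boundaries
    near = np.any(np.abs(h[:, None] - inner[None, :]) < guard, axis=1)
    idx[near] = -1                         # dropped samples
    return idx
```

Widening the guard band lowers the disagreement rate but drops more samples, which is the KDR-versus-EKGR trade-off the closed-form analysis captures.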
Fine edge detection model based on RCF
JING Nianzhao, YANG Wei
Journal of Computer Applications    2019, 39 (9): 2535-2540.   DOI: 10.11772/j.issn.1001-9081.2019030462
Aiming at the coarse and blurry edges produced by deep-learning-based edge detection, an end-to-end fine edge detection model based on RCF (Richer Convolutional Features for edge detection) was proposed. Based on the RCF model, an attention mechanism was introduced into the backbone network, and Squeeze-and-Excitation (SE) modules were used to extract image edge features. To avoid excessive loss of detail information, two downsampling operations in the backbone network were removed, and dilated convolutions were used in the backbone to enlarge the receptive field of the model. A residual module was used to fuse the edge maps at different scales. The model was trained on the Berkeley Segmentation Data Set (BSDS500) and the PASCAL VOC Context dataset with a multi-step training approach and tested on BSDS500. The experimental results show that the model improves ODS (Optimal Dataset Scale) and OIS (Optimal Image Scale) to 0.817 and 0.838 respectively, generates finer edges without affecting real-time performance, and has better robustness.
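The Squeeze-and-Excitation reweighting used in the backbone can be sketched for a single (C, H, W) feature map as below; `w1` and `w2` stand for the two learned fully connected layers, with the usual channel-reduction ratio omitted for brevity.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation sketch for one feature map x of shape (C, H, W)."""
    z = x.mean(axis=(1, 2))                               # squeeze: global average pool
    s = 1 / (1 + np.exp(-(w2 @ np.maximum(w1 @ z, 0))))   # excitation: FC-ReLU-FC-sigmoid
    return x * s[:, None, None]                           # channel-wise reweighting
```

The learned per-channel gates `s` let the network emphasize edge-relevant channels and suppress the rest, which is the attention effect the model exploits.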
Quantitative analysis of physical secret key in OFDM channel based on universal software radio peripheral
DING Ning, GUAN Xinrong, YANG Weiwei
Journal of Computer Applications    2019, 39 (6): 1780-1785.   DOI: 10.11772/j.issn.1001-9081.2018102120
In order to compare and analyze the performance of the single-threshold and double-threshold quantization algorithms on measured data, and to improve the performance of the physical-layer secret key by optimizing the quantization parameters, an Orthogonal Frequency Division Multiplexing (OFDM) system was built with a Universal Software Radio Peripheral (USRP). The channel amplitude feature was extracted as the key source through channel estimation, and the two quantization algorithms were analyzed in terms of key consistency, randomness and residual key length. Simulation results for these three metrics under single-threshold and double-threshold quantization were obtained from the measured data. The results show that the single-threshold quantization algorithm has an optimal quantization threshold that minimizes the key disagreement rate under a given key randomness constraint; that the double-threshold quantization algorithm has an optimal quantization factor that maximizes the effective key length; and that when the Cascade key negotiation algorithm is used for reconciliation, there is a trade-off between key consistency and key generation rate across the quantization algorithms.
Pedestrian heading particle filter correction method with indoor environment constraints
LIU Pan, ZHANG Bang, HUANG Chao, YANG Weijun, XU Zhengyi
Journal of Computer Applications    2018, 38 (12): 3360-3366.   DOI: 10.11772/j.issn.1001-9081.2018040883
In traditional indoor pedestrian positioning algorithms based on dead reckoning and Kalman filtering, the cumulative error of the heading angle causes the position error to accumulate continuously. To solve this problem, a pedestrian heading particle filter algorithm with indoor environment constraints was proposed to correct the heading error. Firstly, the indoor map information was abstracted into a structure represented by line segments, and the map data was dynamically integrated into the particle compensation and weight allocation mechanisms. Then, a heading self-correction mechanism was constructed from the associated map data and the sample to be calibrated, and a distance weighting mechanism was constructed from the associated map data and the particle positions. In addition, the particle filter model was simplified by taking the heading as the only state variable to be optimized, which reduced the dimension of the state vector and thereby the complexity of data analysis and processing while improving the positioning accuracy. By integrating indoor environment information, the proposed algorithm effectively suppresses the continuous accumulation of heading errors. The experimental results show that, compared with the traditional Kalman filter algorithm, the proposed algorithm significantly improves pedestrian positioning accuracy and stability: in a two-dimensional walking experiment over a distance of 435 m, the heading angle error is reduced from 15.3° to 0.9°, and the absolute error at the end position is reduced from 5.50 m to 0.87 m.
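A heading-only particle filter with map-line constraints can be sketched as follows. Treating the directions of nearby map line segments as soft measurements of the walking direction is an illustrative reading of the environment constraint, and all noise parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def heading_pf_step(particles, gyro_delta, wall_dirs, sigma_pred=0.02,
                    sigma_meas=0.15):
    """One step of a heading-only particle filter (sketch). Particles are
    heading angles in radians; `wall_dirs` holds directions of nearby map
    line segments, used as soft constraints (indoor walking tends to run
    parallel to a wall)."""
    # predict: propagate each particle with the gyro increment plus noise
    particles = particles + gyro_delta + rng.normal(0, sigma_pred, len(particles))
    # weight: angular distance to the closest wall direction, modulo pi
    diff = np.abs(((particles[:, None] - wall_dirs[None, :]) + np.pi / 2)
                  % np.pi - np.pi / 2)
    w = np.exp(-(diff.min(axis=1) ** 2) / (2 * sigma_meas ** 2))
    w /= w.sum()
    # systematic resampling
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(len(w))) / len(w))
    return particles[idx]
```

Because the state is one-dimensional, a few hundred particles suffice, which matches the abstract's point about reduced processing complexity.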
Prediction of eight-class protein secondary structure based on deep learning
ZHANG Lei, LI Zheng, ZHENG Fengbin, YANG Wei
Journal of Computer Applications    2017, 37 (5): 1512-1515.   DOI: 10.11772/j.issn.1001-9081.2017.05.1512
Predicting protein secondary structure is an important issue in structural biology. Aiming at eight-class protein secondary structure prediction, a deep learning prediction algorithm combining a recurrent neural network and a feed-forward neural network was proposed. A bidirectional recurrent neural network was used to model the local and long-range interactions between amino acid residues in a protein, and its hidden-layer outputs were fed to a three-layer feed-forward network to predict the eight secondary structure classes. Experimental results show that the proposed method achieves a Q8 accuracy of 67.9% on the CB513 dataset, which is significantly better than SSpro8 and SC-GSN (Supervised Convolutional-Generative Stochastic Network).
Self-examples reconstruction of city street image from driving recorder
YANG Wei, XIE Weicheng, JIANG Wenbo, SHI Linyu
Journal of Computer Applications    2017, 37 (3): 817-822.   DOI: 10.11772/j.issn.1001-9081.2017.03.817
To sustain high-speed image display and storage in real time, popular driving recorders usually capture images at low resolution, which seriously hinders the acquisition of effective image information in unexpected situations. To solve this problem, low-resolution city street images were reconstructed using perspective transformation over self-examples of the images together with high-frequency compensation. Perspective transformation was added to the affine transformation for image-patch matching, and high-frequency compensation was used to recover the high-frequency information lost by each matched patch when the image pyramid was constructed. The image pyramid was searched by a non-local multi-scale method to obtain the matched patches, which were synthesized into high-resolution images. Many low-resolution street view images were used to verify the effectiveness of the algorithm. Compared with existing typical algorithms such as ScSR (Sparse coding Super-Resolution), Upscaling and SCN (Sparse Coding based Network), the experimental results show that the proposed algorithm is better on several blind evaluation indices and can improve image resolution while preserving the edges and details of the image.
Very low resolution face recognition via super-resolution based on extreme learning machine
LU Tao, YANG Wei, WAN Yongjing
Journal of Computer Applications    2016, 36 (2): 580-585.   DOI: 10.11772/j.issn.1001-9081.2016.02.0580
A very low-resolution image contains little discriminative information and is easily disturbed by noise, which reduces the recognition rate of existing face recognition algorithms. To solve this problem, a very low resolution face recognition algorithm via Super-Resolution (SR) based on Extreme Learning Machine (ELM) was proposed. Firstly, sparse-representation dictionaries of Low-Resolution (LR) and High-Resolution (HR) images were learned from the sample set, and HR images were reconstructed by exploiting the manifold consistency of the LR and HR representation coefficients. Secondly, an ELM model was built on the reconstructed HR images, and the connection weights of the feed-forward neural network were obtained by training. Finally, the ELM was used to predict the category of the very low-resolution image. The experimental results show that, compared with a traditional face recognition algorithm based on Collaborative Representation Classification (CRC), the recognition rate of the proposed algorithm on the reconstructed HR images increases by 2% while the recognition time is greatly shortened. The results show that the proposed algorithm can effectively alleviate the face recognition problem caused by the limited discriminative information in very low-resolution images and has better recognition ability.
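The ELM classification stage — a random hidden layer whose output weights are solved in closed form — can be sketched as below; in the paper's pipeline the inputs would be features of the reconstructed HR images, while the hidden width and regularizer here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

class ELM:
    """Minimal Extreme Learning Machine sketch: random fixed hidden layer,
    output weights obtained by regularized least squares (no backprop)."""
    def __init__(self, n_hidden=64, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def fit(self, X, y):
        T = np.eye(y.max() + 1)[y]                    # one-hot targets
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # hidden activations
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ T)          # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```

The closed-form solve replaces iterative training, which is why recognition (and training) time is short compared with heavier classifiers.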
Strength model of user relationship based on latent regression
HAN Zhongming, TAN Xusheng, CHEN Yan, YANG Weijie
Journal of Computer Applications    2016, 36 (2): 336-341.   DOI: 10.11772/j.issn.1001-9081.2016.02.0336
To effectively measure the strength of directed relationships among users in a social network, a smoothed model for computing user interaction strength based on directed interaction frequency was proposed. Furthermore, taking user interaction strength as the dependent variable and user relationship strength as a latent variable, a latent regression model was constructed, and an Expectation-Maximization (EM) algorithm for estimating its parameters was given. Comprehensive experiments were conducted on two datasets extracted from Renren and Sina Weibo, in terms of best friends and strength ranking. On the Renren dataset, the TOP-10 best friends chosen by the proposed model were compared with those from manual annotation: the mean Normalized Discounted Cumulative Gain (NDCG) of the model was 69.48% and the Mean Average Precision (MAP) was 66.3%, both significantly improved. On the Sina Weibo dataset, the range of infection spread by nodes with higher relationship strength was 80% larger than that of the other nodes. The experimental results show that the proposed model can effectively measure user relationship strength.
Regularized approach for incomplete robust principal component analysis and its applications in background modeling
SHI Jiarong, ZHENG Xiuyun, YANG Wei
Journal of Computer Applications    2015, 35 (10): 2824-2827.   DOI: 10.11772/j.issn.1001-9081.2015.10.2824
Because existing Robust Principal Component Analysis (RPCA) approaches do not consider the continuity or the incompleteness of sequential data, a low-rank matrix recovery model named Regularized Incomplete RPCA (RIRPCA) was proposed. First, the RIRPCA model was constructed based on a metric function for evaluating continuity, minimizing a weighted combination of the matrix nuclear norm, the L1 norm and a regularization term. Then, the augmented Lagrange multiplier algorithm, which has good scalability and low computational complexity, was employed to solve the resulting convex optimization problem. Finally, RIRPCA was applied to video background modeling. The experimental results demonstrate that the proposed method is superior to matrix completion and incomplete RPCA in recovering missing entries and separating the foreground.
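The augmented Lagrange multiplier solver alternates singular value thresholding for the low-rank part with entrywise shrinkage for the sparse part. A sketch of the basic, fully observed RPCA case follows; the paper's incompleteness handling and continuity regularizer are omitted, and the step parameters are standard assumed defaults.

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0)

def rpca(D, lam=None, iters=100):
    """Inexact augmented Lagrange multipliers for D ≈ L + S,
    with L low rank (nuclear norm) and S sparse (L1 norm)."""
    m, n = D.shape
    lam = lam or 1 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        # L-step: singular value thresholding
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(s, 1 / mu)) @ Vt
        # S-step: entrywise shrinkage
        S = soft(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)      # dual update enforces D = L + S
        mu = min(mu * 1.05, 1e7)   # gradually tighten the penalty
    return L, S
```

In background modeling, L captures the static background of the frame matrix and S the moving foreground.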
Review on lightweight cryptography suitable for constrained devices
YANG Wei, WAN Wunan, CHEN Yun, ZHANG Yantao
Journal of Computer Applications    2014, 34 (7): 1871-1877.   DOI: 10.11772/j.issn.1001-9081.2014.07.1871
With the rapid development of the Internet of Things (IoT), the security of constrained devices faces serious challenges, and LightWeight Cryptography (LWC), as the main security measure for constrained devices, is receiving more and more attention from researchers. Recent advances in lightweight cryptography, covering design strategy, security and performance, were reviewed. Firstly, design strategies and the key issues arising during design were elaborated, and the principles and implementation mechanisms of some typical and commonly used lightweight ciphers were analyzed and discussed. Then the commonly used cryptanalysis methods were summarized, and the threat of side-channel attacks and the issues to note when adding resistance mechanisms were emphasized. Furthermore, the existing lightweight ciphers were compared and analyzed in detail in terms of their important performance indicators, and the environments suitable for hardware-oriented and software-oriented lightweight ciphers were given. Finally, some unresolved difficult issues and possible future directions of lightweight cryptography research were pointed out. Considering the characteristics of lightweight cryptography and its application environments, comprehensive assessment of security and performance will be an issue worth in-depth research in the future.
Texture clustering matting algorithm
YANG Wei, GAN Tao, LAN Gang
Journal of Computer Applications    2013, 33 (11): 3213-3216.  
To solve the problem that traditional matting methods do not perform well in highly textured regions, a Texture Clustering Matting (TCM) algorithm based on K-Nearest Neighbor (KNN) matting was proposed. First, texture features were extracted. Second, a new feature space containing color, position and texture information was constructed. Third, the matting Laplacian matrix was built by clustering neighbors in the new feature space. Last, the opacity was obtained from the closed-form solution. The experiments on benchmark datasets indicate that the overall ranking of the proposed method improves significantly, achieving comparatively leading matting results on highly textured images.
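Building a Laplacian from nearest neighbors in the combined feature space can be sketched as follows. This is a brute-force O(n²) toy; the feature scaling and the particular affinity formula are assumptions in the spirit of KNN matting, not the paper's exact construction.

```python
import numpy as np

def knn_matting_laplacian(features, k=5):
    """Build a KNN-style matting Laplacian L = D - A (sketch). Each row of
    `features` is one pixel's vector of color, position and texture
    channels (suitably scaled); each pixel links to its k nearest
    neighbors in this space with affinity 1 - dist/max_dist."""
    n = len(features)
    d2 = ((features[:, None] - features[None, :]) ** 2).sum(-1)
    A = np.zeros((n, n))
    order = np.argsort(d2, axis=1)
    dmax = np.sqrt(d2.max()) or 1.0
    for i in range(n):
        for j in order[i, 1:k + 1]:             # skip self at position 0
            a = 1 - np.sqrt(d2[i, j]) / dmax    # affinity in [0, 1]
            A[i, j] = A[j, i] = max(A[i, j], a)
    return np.diag(A.sum(1)) - A                # graph Laplacian
```

Adding texture channels to the feature vector is what lets neighbors cluster by texture rather than raw color, which is the paper's key change.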
Algorithm for discovering influential nodes in weighted social networks
HAN Zhongming, YUAN Liling, YANG Weijie, WAN Yueliang
Journal of Computer Applications    2013, 33 (06): 1553-1562.   DOI: 10.3724/SP.J.1087.2013.01553
Discovering key nodes is very important for social networks, yet most existing key-node discovery methods do not take the relationship strength between nodes into account, although social networks are in essence weighted networks in which relationship strengths differ. A new method for computing the relationship strength between nodes based on their interactions was proposed, combining local features with global features, and a node activity degree based on user behavior features was defined. As a result, a social network was represented as a dual-weighted network, with relationship strength as edge weight and node activity as node weight. Based on the PageRank algorithm, two improved algorithms were proposed: the node weights were used as the damping coefficient, and the edge weights were used to compute node importance during the iterative process. Comprehensive experiments were conducted on two datasets from different sources. The experimental results show that the proposed algorithms can effectively discover key nodes in real social networks.
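A doubly weighted PageRank in this spirit can be sketched as below. Rescaling node activity into a per-node damping coefficient in [0.5, 0.95] is an assumed concrete choice, since the abstract does not give the exact formula.

```python
import numpy as np

def weighted_pagerank(W, activity, iters=100, tol=1e-12):
    """PageRank on a dual-weighted network (sketch): relationship strengths
    W[i, j] (strength of edge i -> j) define the transition probabilities,
    and each node's activity degree, rescaled into [0.5, 0.95], replaces
    the single global damping factor 0.85."""
    W = np.asarray(W, dtype=float)
    n = len(W)
    out = W.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling nodes spread uniformly
    P = np.divide(W, out, out=np.full_like(W, 1.0 / n), where=out > 0)
    span = np.ptp(activity) or 1.0
    d = 0.5 + 0.45 * (activity - activity.min()) / span
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        # active nodes pass on more of their score via their links
        r_new = ((1 - d) * r).sum() / n + (d * r) @ P
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```

Because each node keeps its own damping term, highly active users propagate more of their score through strong ties, matching the dual-weighted network idea.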
Information granularity in interval-valued intuitionistic fuzzy information systems
YANG Wei-ping, LIN Meng-lei
Journal of Computer Applications    2012, 32 (06): 1657-1661.   DOI: 10.3724/SP.J.1087.2012.01657
Granular computing is an emerging conceptual framework in information processing and plays an important role in handling fuzzy and uncertain information. Interval-valued intuitionistic fuzzy information granularity is an important tool for measuring the uncertainty of interval-valued intuitionistic fuzzy information systems. Based on such systems, four operators among granular structures — intersection, union, subtraction and complement — were constructed, three new partial order relations on interval-valued intuitionistic fuzzy information systems were introduced, and the relationships among them were established. An interval-valued intuitionistic fuzzy information granularity was defined, together with an axiomatic approach to it, and finally its properties were investigated.
Niederreiter public-key cryptosystem based on QC-LDPC
Lei-Xin YANG, Wei-zhang DU
Journal of Computer Applications    2011, 31 (07): 1906-1908.   DOI: 10.3724/SP.J.1087.2011.01906
A Niederreiter public-key cryptosystem based on QC-LDPC codes was proposed. Since the parity-check matrix of a QC-LDPC code is sparse, consists of circulant blocks and offers high error-correction capability, the key size of the new cryptosystem is reduced and its transmission rate is improved compared with other public-key cryptosystems. A new parity-check matrix was derived through an invertible transformation matrix Q with diagonal form, so that the sparse structure of the original check matrix is concealed. Analysis against existing attack methods confirms that the security of the cryptosystem is improved.
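The Niederreiter scheme itself — public key H' = SHP, ciphertext equal to the public syndrome of a low-weight plaintext vector — can be illustrated with a toy (7,4) Hamming code standing in for a large QC-LDPC code; the matrices below are small hand-picked examples, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = col + int(np.argmax(A[col:, col]))
        if A[piv, col] == 0:
            raise ValueError("matrix is singular over GF(2)")
        A[[col, piv]] = A[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, n:]

# Toy instance: the (7,4) Hamming code corrects t = 1 error;
# column j of H is the binary expansion of j+1, so all columns differ.
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])
S = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 1]])   # invertible scrambler
P = np.eye(7, dtype=int)[rng.permutation(7)]      # permutation matrix
H_pub = S @ H @ P % 2                             # public key H' = S H P

def encrypt(e):
    """Plaintext is a weight-1 vector e; ciphertext is its public syndrome."""
    return H_pub @ e % 2

def decrypt(c):
    s = gf2_inv(S) @ c % 2                          # unscramble: s = H P e
    pos = int(np.argmax(np.all(H.T == s, axis=1)))  # syndrome = a column of H
    pe = np.zeros(7, dtype=int)
    pe[pos] = 1
    return P.T @ pe                                 # undo the permutation
```

In a real QC-LDPC instantiation the syndrome decoding step is iterative belief propagation rather than a column lookup, but the keygen/encrypt/decrypt shape is the same.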
Carving surface modeling of images based on quadratic Bezier curved surface
CHEN Yutuo, YANG Weimin, XIONG Qiubao, HU Liang, HAN Xuli
Journal of Computer Applications   
A carving surface was modeled directly with quadratic Bezier surface patches according to the grey or color pixel information of an image with clear texture and simple structure. The method segmented the image, analyzed and computed the pixel values and position parameters of the sub-images, and used these values as the feature parameters for constructing the surface patches of the model. The modeling was implemented by piecewise smooth connection of the surface patches. The established surface model can describe the basic structural characteristics of the objects in the original image and satisfy the requirements of digital carving processing, with low complexity and strong practicability.
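Evaluating one quadratic Bezier patch from a 3x3 grid of feature-derived control points can be sketched as below; deriving the control points as (x, y, height) triples from pixel positions and intensities is an illustrative assumption.

```python
import numpy as np

def quad_bezier_patch(ctrl, u, v):
    """Evaluate a quadratic (degree-2) tensor-product Bezier surface patch
    at (u, v) in [0, 1]^2. `ctrl` is a 3x3 grid of control points, e.g.
    (x, y, height) triples built from pixel positions and intensities."""
    B = lambda t: np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    return np.einsum("i,ijk,j->k", B(u), ctrl, B(v))
```

A full model stitches many such patches together, sharing boundary control points so adjacent patches join smoothly.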
Risk analysis of information system based on variable precision rough set
XIAO Long, DAI Zong-kun, YANG Wei
Journal of Computer Applications    2005, 25 (07): 1595-1597.   DOI: 10.3724/SP.J.1087.2005.01595
A risk analysis method for information systems based on Variable Precision Rough Set (VPRS) was proposed. It used attribute reduction, filtered the risk rules, and combined quantitative measurement with qualitative analysis, which not only greatly reduces the data but also increases the validity of the risk rules. Finally, the data in reference [4] were analyzed, and the risk rules among them were mined effectively with this method.
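The variable-precision (beta-majority) lower approximation that underlies VPRS rule filtering can be sketched as below; the equivalence-relation function and the beta value are illustrative assumptions.

```python
from collections import defaultdict

def vprs_lower(U, X, ind, beta=0.2):
    """Variable-precision lower approximation (sketch): an equivalence
    class [x] is included if its misclassification rate with respect to
    the concept X is at most beta, relaxing classical rough sets' strict
    inclusion. `ind` maps each object to its equivalence-class id."""
    classes = defaultdict(set)
    for x in U:
        classes[ind(x)].add(x)
    lower = set()
    for cls in classes.values():
        if 1 - len(cls & X) / len(cls) <= beta:   # tolerate up to beta errors
            lower |= cls
    return lower
```

Tolerating a small error rate beta is what lets VPRS keep risk rules that classical rough sets would discard as inconsistent.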
View-dependent geometry compression and transmission algorithm for 3-D graphics
YANG Wei
Journal of Computer Applications    2005, 25 (03): 601-602.   DOI: 10.3724/SP.J.1087.2005.0601
A geometry compression algorithm for 3-D graphics was presented. It divided the 3-D model into many layers, created a BSP tree at the receiver, and extracted the view-dependent layers and visible-triangle information. This algorithm can greatly reduce the transmitted data of 3-D graphics without affecting the processing and display of sensitive data.
Trajectory-based multipath tolerant routing for sensor networks
YANG Wei-feng,PENG Zhao-yi,SUN Xing-ming
Journal of Computer Applications    2005, 25 (03): 506-510.   DOI: 10.3724/SP.J.1087.2005.0506
In order to improve the fault tolerance of routing algorithms, a new trajectory-based algorithm for constructing multiple paths was proposed. Its basic idea is that the source node selects several suitable curves toward the sink at one time, and intermediate nodes build dynamic forwarding tables according to different greedy forwarding strategies. Compared with other multipath construction algorithms, the new algorithm needs only one-hop information and little computation, so it is distributed and easy to implement. Theoretical analysis and simulation results show that it is more fault-tolerant.